feat: use Cubic CC by default #2295
Conversation
Firefox uses Cubic by default:

```yaml
- name: network.http.http3.cc_algorithm
  type: RelaxedAtomicUint32
  value: 1
  mirror: always
  rust: true
```

https://searchfox.org/mozilla-central/rev/f9517009d8a4946dbdd3acd72a31dc34fca79586/modules/libpref/init/StaticPrefList.yaml

This commit updates Neqo to use Cubic instead of New Reno by default.
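As an aside for readers, here is a minimal sketch of what selecting Cubic looks like from the API side. The `ConnectionParameters::cc_algorithm` builder and the `CongestionControlAlgorithm` enum are assumed from neqo-transport's public interface; this is not a copy of this PR's diff.

```rust
// Hypothetical sketch, not the diff of this PR: explicitly selecting Cubic
// when building connection parameters. Type and method names are assumptions
// based on neqo-transport's public API.
use neqo_transport::{CongestionControlAlgorithm, ConnectionParameters};

fn params_with_cubic() -> ConnectionParameters {
    // With this change, Cubic becomes the default (matching Firefox's
    // network.http.http3.cc_algorithm = 1); New Reno remains selectable.
    ConnectionParameters::default().cc_algorithm(CongestionControlAlgorithm::Cubic)
}
```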
Failed Interop Tests (QUIC Interop Runner, client vs. server; differences relative to 108fb8d):
neqo-latest as client
neqo-latest as server

Succeeded Interop Tests (QUIC Interop Runner, client vs. server):
neqo-latest as client
neqo-latest as server

Unsupported Interop Tests (QUIC Interop Runner, client vs. server):
neqo-latest as client
neqo-latest as server
Cubic does not seem to grow the cwnd as fast as New Reno in the congestion avoidance phase. More specifically, while our New Reno implementation increases its cwnd by one SMSS in each iteration, our Cubic implementation only does so in every second iteration.

neqo/neqo-transport/src/connection/tests/cc.rs Lines 286 to 287 in 922d266
That is surprising to me. I expected Cubic to grow faster than New Reno in the beginning, given its concave increase early in congestion avoidance. I will give this more thought.
That seems like a bug: https://datatracker.ietf.org/doc/html/rfc9438#section-1-5
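For reference, RFC 9438 defines the window during congestion avoidance as W_cubic(t) = C*(t - K)^3 + W_max, with K = cbrt(W_max*(1 - beta_cubic)/C). The following is a small, self-contained sketch of that curve next to New Reno's roughly one-SMSS-per-RTT growth; it uses the RFC's constants but is a toy model, not Neqo's cubic.rs.

```rust
// Toy model of the two growth rules, in units of MSS, using RFC 9438's
// constants (C = 0.4, beta_cubic = 0.7). Not Neqo's implementation.
const C: f64 = 0.4;
const BETA_CUBIC: f64 = 0.7;

/// Cubic window t seconds after a congestion event:
/// W_cubic(t) = C * (t - K)^3 + W_max, K = cbrt(W_max * (1 - beta_cubic) / C).
fn w_cubic(t: f64, w_max: f64) -> f64 {
    let k = (w_max * (1.0 - BETA_CUBIC) / C).cbrt();
    C * (t - k).powi(3) + w_max
}

/// New Reno: halve on loss, then add roughly one MSS per RTT.
fn w_new_reno(rtts_since_event: u32, w_max: f64) -> f64 {
    0.5 * w_max + f64::from(rtts_since_event)
}

fn main() {
    let w_max = 100.0; // window (in MSS) at the time of the loss
    let rtt = 0.05; // 50 ms round trip
    for n in 0..=5u32 {
        let t = f64::from(n) * rtt;
        println!(
            "after {n} RTTs: cubic {:.1} MSS, new reno {:.1} MSS",
            w_cubic(t, w_max),
            w_new_reno(n, w_max)
        );
    }
}
```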
My assessment above is wrong, for the following reason:

That Cubic generally grows the cwnd slower than New Reno in congestion avoidance: wrong.

That our Cubic implementation only increases its cwnd in every second iteration of the test: correct. BUT, while New Reno halves its congestion window on a congestion event:

neqo/neqo-transport/src/cc/new_reno.rs Line 45 in 9e5a622

Cubic reduces it by only 30%:

neqo/neqo-transport/src/cc/cubic.rs Lines 23 to 25 in 9e5a622

neqo/neqo-transport/src/cc/cubic.rs Lines 206 to 207 in 9e5a622

Thus, within the 5 iterations of the test, Cubic does not grow its congestion window as fast as New Reno because it starts off with a larger congestion window after the congestion event, i.e. it has a head start.
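To put rough numbers on that head start: 0.5 is New Reno's multiplicative decrease and 0.7 is Cubic's beta, as cited above; the window and MSS values below are made up for illustration.

```rust
// Toy numbers only: compare the windows the two controllers resume from
// after a congestion event. 0.5 (New Reno) and 0.7 (Cubic's beta) are the
// factors discussed above; cwnd and MSS values are illustrative.
fn main() {
    let cwnd_at_loss = 64_000.0_f64; // bytes
    let mss = 1_340.0_f64; // bytes

    let new_reno_start = 0.5 * cwnd_at_loss; // 32_000 bytes
    let cubic_start = 0.7 * cwnd_at_loss; // 44_800 bytes

    // Even if New Reno then adds a full MSS per iteration while Cubic only
    // adds one every second iteration, New Reno is still behind after the
    // test's 5 iterations:
    let new_reno_after_5 = new_reno_start + 5.0 * mss; // 38_700 bytes
    let cubic_after_5 = cubic_start + 2.0 * mss; // 47_480 bytes
    assert!(cubic_after_5 > new_reno_after_5);
    println!("new reno: {new_reno_after_5} B, cubic: {cubic_after_5} B");
}
```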
Aside: Is there any point in having
I assume the split is due to its usage with the following:

neqo/neqo-transport/src/cc/cubic.rs Lines 206 to 207 in 9e5a622

I don't have an opinion on whether it is worth the complexity it introduces. Given our work ahead (#1912), I suggest we keep it as is for now.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #2295      +/-   ##
==========================================
+ Coverage   95.29%   95.31%   +0.01%
==========================================
  Files         114      114
  Lines       36868    36868
  Branches    36868    36868
==========================================
+ Hits        35135    35141       +6
+ Misses       1727     1721       -6
  Partials        6        6
```

☔ View full report in Codecov by Sentry.
Curious what the benchmark results will be.
Benchmark results: Performance differences relative to e006a7d.

decode 4096 bytes, mask ff: No change in performance detected. time: [11.834 µs 11.865 µs 11.905 µs] change: [-0.8811% -0.4026% +0.0021%] (p = 0.08 > 0.05)
decode 1048576 bytes, mask ff: No change in performance detected. time: [2.8828 ms 2.8906 ms 2.8999 ms] change: [-0.8223% -0.3308% +0.1434%] (p = 0.19 > 0.05)
decode 4096 bytes, mask 7f: No change in performance detected. time: [19.746 µs 19.788 µs 19.835 µs] change: [-0.3915% -0.0412% +0.3512%] (p = 0.84 > 0.05)
decode 1048576 bytes, mask 7f: No change in performance detected. time: [5.0724 ms 5.0835 ms 5.0960 ms] change: [-0.2659% +0.0593% +0.3919%] (p = 0.73 > 0.05)
decode 4096 bytes, mask 3f: No change in performance detected. time: [6.9124 µs 6.9494 µs 6.9911 µs] change: [-0.0053% +0.4004% +0.9021%] (p = 0.08 > 0.05)
decode 1048576 bytes, mask 3f: No change in performance detected. time: [1.4152 ms 1.4195 ms 1.4251 ms] change: [-0.5778% +0.0009% +0.5856%] (p = 0.99 > 0.05)
coalesce_acked_from_zero 1+1 entries: No change in performance detected. time: [98.771 ns 99.125 ns 99.477 ns] change: [-0.6985% -0.2234% +0.2346%] (p = 0.35 > 0.05)
coalesce_acked_from_zero 3+1 entries: No change in performance detected. time: [116.51 ns 116.75 ns 117.03 ns] change: [-0.6398% -0.2113% +0.1820%] (p = 0.32 > 0.05)
coalesce_acked_from_zero 10+1 entries: No change in performance detected. time: [116.17 ns 116.50 ns 116.92 ns] change: [-0.7111% -0.2069% +0.2198%] (p = 0.41 > 0.05)
coalesce_acked_from_zero 1000+1 entries: No change in performance detected. time: [97.802 ns 97.917 ns 98.051 ns] change: [-1.2613% -0.2637% +0.6844%] (p = 0.62 > 0.05)
RxStreamOrderer::inbound_frame(): Change within noise threshold. time: [111.34 ms 111.38 ms 111.43 ms] change: [-0.4256% -0.3652% -0.3042%] (p = 0.00 < 0.05)
SentPackets::take_ranges: No change in performance detected. time: [5.4828 µs 5.6345 µs 5.7947 µs] change: [-15.342% -4.0164% +4.2355%] (p = 0.64 > 0.05)
transfer/pacing-false/varying-seeds: Change within noise threshold. time: [42.882 ms 42.963 ms 43.044 ms] change: [+2.8088% +3.0786% +3.3434%] (p = 0.00 < 0.05)
transfer/pacing-true/varying-seeds: Change within noise threshold. time: [42.934 ms 43.004 ms 43.075 ms] change: [+2.5597% +2.7924% +3.0366%] (p = 0.00 < 0.05)
transfer/pacing-false/same-seed: 💔 Performance has regressed. time: [42.714 ms 42.797 ms 42.879 ms] change: [+3.1645% +3.4224% +3.6685%] (p = 0.00 < 0.05)
transfer/pacing-true/same-seed: Change within noise threshold. time: [42.941 ms 43.012 ms 43.085 ms] change: [+1.9150% +2.1313% +2.3645%] (p = 0.00 < 0.05)
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: No change in performance detected. time: [900.42 ms 909.73 ms 919.34 ms] thrpt: [108.77 MiB/s 109.92 MiB/s 111.06 MiB/s] change: time: [-2.3569% -0.8115% +0.6345%] (p = 0.28 > 0.05) thrpt: [-0.6305% +0.8181% +2.4138%]
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected. time: [315.05 ms 317.21 ms 319.44 ms] thrpt: [31.305 Kelem/s 31.525 Kelem/s 31.741 Kelem/s] change: time: [-0.9305% +0.0081% +0.9152%] (p = 0.99 > 0.05) thrpt: [-0.9069% -0.0081% +0.9392%]
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: 💚 Performance has improved. time: [33.863 ms 34.036 ms 34.223 ms] thrpt: [29.220 elem/s 29.380 elem/s 29.531 elem/s] change: time: [-3.3477% -2.6772% -1.9981%] (p = 0.00 < 0.05) thrpt: [+2.0388% +2.7509% +3.4637%]
1-conn/1-100mb-resp/mtu-1504 (aka. Upload)/client: No change in performance detected. time: [1.6592 s 1.6762 s 1.6933 s] thrpt: [59.056 MiB/s 59.659 MiB/s 60.268 MiB/s] change: time: [-2.9356% -1.4871% +0.0531%] (p = 0.05 > 0.05) thrpt: [-0.0531% +1.5095% +3.0244%]

Client/server transfer results: Transfer of 33554432 bytes over loopback.